8 research outputs found

    Spatial representation and visual impairment - Developmental trends and new technological tools for assessment and rehabilitation

    It is well known that perception is mediated by the five sensory modalities (sight, hearing, touch, smell and taste), which allow us to explore the world and build a coherent spatio-temporal representation of the surrounding environment. Typically, our brain collects and integrates coherent information from all the senses to build a reliable spatial representation of the world. In this sense, perception does not emerge from the individual activity of distinct sensory modalities operating as separate modules, but rather from multisensory integration processes. This interaction occurs whenever inputs from the senses are coherent in time and space (Eimer, 2004). Spatial perception therefore emerges from the contribution of unisensory and multisensory information, with a predominant role of visual information for space processing during the first years of life. Although a growing body of research indicates that visual experience is essential to the development of spatial abilities, to date very little is known about the mechanisms underpinning spatial development when the visual input is impoverished (low vision) or missing (blindness). The main aim of this thesis is to increase knowledge about the impact of visual deprivation on spatial development and consolidation, and to evaluate the effects of novel technological systems designed to quantitatively improve perceptual and cognitive spatial abilities in the case of visual impairment. Chapter 1 summarizes the main research findings on the role of vision and multisensory experience in spatial development. Overall, these findings indicate that visual experience facilitates the acquisition of allocentric spatial capabilities, namely perceiving space from a perspective different from that of one's own body. It might therefore be stated that the sense of sight allows a more comprehensive representation of spatial information, since it is based on environmental landmarks that are independent of the body's perspective. Chapter 2 presents original studies carried out during my Ph.D. to investigate the mechanisms underpinning spatial development and to compare the spatial performance of individuals with atypical and typical visual experience, i.e., visually impaired and sighted individuals. Overall, these studies suggest that vision facilitates the spatial representation of the environment by conveying the most reliable spatial reference, i.e., allocentric coordinates. However, when visual feedback is permanently or temporarily absent, as in congenitally blind or blindfolded individuals respectively, compensatory mechanisms might support the refinement of haptic and auditory spatial coding abilities. The studies presented in this chapter validate novel experimental paradigms to assess the role of haptic and auditory experience in spatial representation based on external (i.e., allocentric) frames of reference. Chapter 3 describes the validation of new technological systems based on unisensory and multisensory stimulation, designed to rehabilitate spatial capabilities in the case of visual impairment. Overall, the technological validation of these new devices provides the opportunity to develop an interactive platform for the rehabilitation of spatial impairments following visual deprivation. Finally, Chapter 4 summarizes the findings reported in the previous chapters, focusing on the consequences of visual impairment for the development of unisensory and multisensory spatial experience in visually impaired children and adults compared to sighted peers. It also highlights the potential of the novel experimental tools, once validated, to assess spatial competencies in response to unisensory and multisensory events and to train residual sensory modalities within a multisensory rehabilitation framework.

    Allocentric spatial perception through vision and touch in sighted and blind children.

    Vision and touch play a critical role in spatial development, facilitating the acquisition of allocentric and egocentric frames of reference, respectively. Previous work has shown that children's ability to adopt an allocentric frame of reference might be impaired by the absence of visual experience during growth. In the current work, we investigated whether visual deprivation also impairs the ability to shift from egocentric to allocentric frames of reference in a switching-perspective task performed in the visual and haptic domains. Children with and without visual impairments, from 6 to 13 years of age, were asked to visually (sighted children only) or haptically (blindfolded sighted children and blind children) explore and reproduce a spatial configuration of coins, assuming either an egocentric or an allocentric perspective. Results indicated that temporary visual deprivation impaired the ability of blindfolded sighted children to switch from an egocentric to an allocentric perspective more in the haptic domain than in the visual domain. Moreover, results for visually impaired children indicated that blindness did not impair allocentric spatial coding in the haptic domain, but rather affected the ability to rely on haptic egocentric cues in the switching-perspective task. Finally, our findings suggest that the total absence of vision might impair the development of an egocentric perspective in the case of targets crossing the body midline.
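
    The contrast between the two reference frames probed in this task can be made concrete with a small numerical sketch. The snippet below is a hypothetical illustration, not the study's materials or analysis: it expresses the same coin layout in an allocentric, table-centred frame and in the egocentric frames of two opposite viewing positions, and scores a reproduced layout by its mean positional error. All coordinates, function names and the scoring rule are assumptions.

```python
# Hypothetical illustration of allocentric vs. egocentric coding in a
# switching-perspective task; not the study's code.
import numpy as np

# Coin positions in an allocentric frame: origin at the table centre,
# axes fixed to the room (units: cm).
coins_allocentric = np.array([[10.0, 5.0], [-8.0, 12.0], [0.0, -6.0]])

def to_egocentric(points, viewer_pos, viewer_angle_rad):
    """Express table-fixed points relative to a viewer located at
    viewer_pos and facing viewer_angle_rad (0 = table's +y axis)."""
    c, s = np.cos(-viewer_angle_rad), np.sin(-viewer_angle_rad)
    rot = np.array([[c, -s], [s, c]])
    return (points - viewer_pos) @ rot.T

# Child seated at the near edge vs. the allocentric probe given from the
# opposite side of the table (a 180-degree switch of perspective).
near_view = to_egocentric(coins_allocentric, np.array([0.0, -40.0]), 0.0)
far_view = to_egocentric(coins_allocentric, np.array([0.0, 40.0]), np.pi)

def mean_error(reproduced, target):
    """Mean Euclidean distance between reproduced and target coin positions."""
    return float(np.mean(np.linalg.norm(reproduced - target, axis=1)))

# Large error if the child reproduces the layout in the wrong frame.
print(mean_error(near_view, far_view))
```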

    Clinical assessment of the TechArm system on visually impaired and blind children during uni- and multi-sensory perception tasks

    We developed the TechArm system as a novel technological tool intended for visual rehabilitation settings. The system is designed to provide a quantitative assessment of the developmental stage of perceptual and functional skills that are normally vision-dependent, and to be integrated into customized training protocols. Indeed, the system can provide uni- and multisensory stimulation, allowing visually impaired people to train their capability to correctly interpret non-visual cues from the environment. Importantly, the TechArm is suitable for use by very young children, when the rehabilitative potential is greatest. In the present work, we validated the TechArm system in a pediatric population of low-vision, blind, and sighted children. In particular, four TechArm units were used to deliver unisensory (audio or tactile) or multisensory (audio-tactile) stimulation on the participant's arm, and the participant was asked to report the number of active units. Results showed no significant difference among groups (normal or impaired vision). Overall, we observed the best performance in the tactile condition, whereas auditory accuracy was around chance level. We also found that performance in the audio-tactile condition was better than in the audio condition alone, suggesting that multisensory stimulation is beneficial when perceptual accuracy and precision are low. Interestingly, we observed that for low-vision children accuracy in the audio condition improved in proportion to the severity of the visual impairment. Our findings confirm the TechArm system's effectiveness in assessing perceptual competencies in sighted and visually impaired children, and its potential for developing personalized rehabilitation programs for people with visual and sensory impairments.
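
    To make the assessment procedure concrete, the sketch below simulates the trial structure described above: on each trial a random subset of the four units is activated in one of the three conditions, the reported count is compared with the true number, and accuracy is aggregated per condition. Function names, trial counts and the simulated respondent are illustrative assumptions and do not reproduce the TechArm software or the study's analysis.

```python
# Minimal sketch of a numerosity-judgement session with four stimulation
# units; illustrative only, not the TechArm system's actual software.
import random
from collections import defaultdict

N_UNITS = 4
CONDITIONS = ["audio", "tactile", "audio-tactile"]

def run_session(n_trials, get_response):
    """get_response(condition, active_units) -> the child's count estimate."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for _ in range(n_trials):
        condition = random.choice(CONDITIONS)
        n_active = random.randint(1, N_UNITS)
        active_units = random.sample(range(N_UNITS), n_active)
        # The real system would drive the tactile actuators and/or
        # loudspeakers of the selected units here; this sketch only logs them.
        reported = get_response(condition, active_units)
        total[condition] += 1
        correct[condition] += int(reported == n_active)
    return {c: correct[c] / total[c] for c in CONDITIONS if total[c]}

# Simulated respondent that is reliable for touch and guesses for audio,
# loosely mirroring the pattern of results summarized above.
def simulated_child(condition, active_units):
    if condition == "tactile":
        return len(active_units)
    return random.randint(1, N_UNITS)

print(run_session(90, simulated_child))
```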

    Optimization of experimental audio-motor assessment of visually impaired people

    Vision is crucial for the development of space perception and cognition. Blind people show impairments in some auditory, motor and social skills, and early-onset blindness also impacts psychomotor, social and emotional development. Despite major advances in technological devices, many of the solutions proposed for visually impaired people are not widely accepted by adults and are not easily adaptable to children. The goal of this thesis work is to develop an optimized assessment system for visually impaired people involved in training sessions for the improvement of audio-motor abilities, in particular children following a rehabilitative program with ABBI (Audio Bracelet for Blind Interaction). Within the thesis we developed a new set-up that can be used independently by rehabilitators, without the supervision of the researchers. The new device is an all-in-one wireless and sensorized set of loudspeakers, easy to mount and controllable from a free, dedicated Android app, offering the therapist a complete testing kit for periodic, short evaluations of children's response to the training.

    Multisensory training improves the development of spatial cognition after sight restoration from congenital cataracts

    Spatial cognition and mobility are typically impaired in congenitally blind individuals, as vision usually calibrates space perception by providing the most accurate distal spatial cues. We have previously shown that sight restoration from congenital bilateral cataracts guides the development of more accurate space perception, even when cataract removal occurs years after birth. However, late cataract-treated individuals do not usually reach the performance levels of the typically sighted population. Here, we developed a brief multisensory training that associated audiovisual feedback with body movements. Late cataract-treated participants quickly improved their space representation and mobility, performing as well as typically sighted controls in most tasks. Their improvement was comparable with that of a group of blind participants, who underwent training coupling their movements with auditory feedback alone. These findings suggest that spatial cognition can be enhanced by a training program that strengthens the association between bodily movements and their sensory feedback (either auditory or audiovisual).

    Feasibility of audio-motor training with the multisensory device ABBI: Implementation in a child with hemiplegia and hemianopia

    Spatial representation is crucial for everyday interaction with the environment. Different factors influence spatial perception, such as body movements and vision. Accordingly, training strategies that exploit the plasticity of the human brain should be adopted early. In the current study we developed and tested a new training protocol, based on the reinforcement of audio-motor associations, to support spatial development in a hemiplegic child with a significant visual field defect (hemianopia) on the same side as the hemiplegic limb. We focused on investigating whether a better representation of space through sound can also improve the involvement of the hemiplegic upper limb in daily-life activities. The experimental training consisted of two weeks of intensive but entertaining rehabilitation, during which the child performed purposely developed audio-motor-spatial exercises with the Audio Bracelet for Blind Interaction (ABBI) for 2 h/day. A battery of tests administered before and after the training indicated that the child improved significantly in both spatial abilities and the involvement of the hemiplegic limb in bimanual tasks. During the assessment, the ActiGraph GT3X+ was used to measure asymmetry in the use of the two upper limbs, together with a standardized clinical tool, the Assisting Hand Assessment (AHA), pre- and post-training. Additionally, spontaneous daily-life activity was measured and recorded for at least 2 h/day. These results confirm that perceptual development in the presence of motor and visual disorders can be enhanced by auditory feedback naturally associated with body movements.
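
    As one possible way to quantify the limb-use asymmetry that wrist-worn accelerometers such as the ActiGraph GT3X+ can capture, the sketch below computes a simple "use ratio" from two synthetic acceleration traces. Both the metric and the synthetic data are illustrative assumptions; the abstract does not specify the authors' actual asymmetry analysis.

```python
# Hedged sketch: one common way to summarize upper-limb use asymmetry from
# two wrist-worn accelerometers; not the study's analysis pipeline.
import numpy as np

def vector_magnitude(acc_xyz):
    """Per-sample acceleration magnitude from an (n, 3) array of x/y/z axes."""
    return np.linalg.norm(acc_xyz, axis=1)

def use_ratio(acc_affected, acc_unaffected):
    """Ratio of summed movement magnitude, affected / unaffected arm.
    Values near 1.0 indicate symmetric use; values well below 1 indicate
    under-use of the affected arm."""
    return vector_magnitude(acc_affected).sum() / vector_magnitude(acc_unaffected).sum()

# Example with synthetic data standing in for a 2 h recording.
rng = np.random.default_rng(0)
affected = 0.4 * rng.normal(size=(7200, 3))    # affected arm moves less
unaffected = 1.0 * rng.normal(size=(7200, 3))
print(round(use_ratio(affected, unaffected), 2))  # well below 1: under-use
```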

    Visual Function and Neuropsychological Profile in Children with Cerebral Visual Impairment

    Cerebral Visual Impairment (CVI) has become the leading cause of visual impairment in children in developed countries. Since CVI may negatively affect neuropsychomotor development, early diagnosis and characterization are fundamental to defining effective habilitation approaches. To date, there is a lack of standardized diagnostic methods to assess CVI in children, and the role of visual functions in children's neuropsychological profiles has been poorly investigated. In the present paper, we aim to describe the clinical and neuropsychological profiles, and to investigate the possible effects of visual functions on the neuropsychological performance, of a cohort of children diagnosed with CVI. Fifty-one children with CVI were included in our retrospective analysis (inclusion criteria: verbal IQ > 70 on Wechsler scales; absence of significant ocular involvement). For each participant, we collected data on neuropsychological assessment (i.e., cognitive, cognitive visual, and learning abilities), basic visual functions (e.g., Best Corrected Visual Acuity [BCVA], contrast sensitivity, and ocular motor abilities) and global development features (e.g., neurological signs and motor development delay) based on standardized tests, according to patients' ages. The results showed that oculomotor dysfunction involving saccades and smooth pursuit may be a core symptom of CVI and might have a significant impact on cognitive visual and other neuropsychological abilities. Furthermore, visual acuity and contrast sensitivity may influence cognitive, cognitive visual, and academic performance. Our findings suggest the importance of a comprehensive assessment of both visual and neuropsychological functions when CVI is suspected in a child, as this is needed to provide a more comprehensive functional profile and to define the best habilitation strategy to sustain functional vision.